Enhanced Balancing of Bias-Variance Tradeoff in Stochastic Estimation: A Minimax Perspective

Authors

Abstract

In “Enhanced Balancing of Bias-Variance Tradeoff in Stochastic Estimation: A Minimax Perspective”, the authors study a framework to construct new classes of stochastic estimators that can consistently beat existing benchmarks regardless of key model parameter values. Oftentimes biased estimators, such as those arising in finite-difference black-box gradient estimation, require the selection of tuning parameters to balance bias and variance and ultimately minimize the overall error. Unfortunately, this selection relies on knowledge that is unknown a priori and thus leads to ad hoc choices in practice. The authors introduce a notion called the asymptotic minimax risk ratio, which is designed to compare an estimator against benchmarks: a ratio value less than one implies that the estimator could asymptotically outperform the benchmark. Based on this, they derive an outperforming weighting scheme by explicitly analyzing the ratio via a tractable reformulation of a nonconvex optimization problem.
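The tradeoff the abstract describes can be seen concretely in central-difference gradient estimation: shrinking the step size h reduces bias (which scales like h^2) but inflates the variance contributed by evaluation noise (which scales like 1/h^2), so an intermediate h minimizes the total error. The following is a minimal numerical sketch of that effect, not the authors' weighting scheme; the test function sin(x) and the noise level are illustrative assumptions.

```python
import math
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def noisy_f(x, sigma=0.01):
    """Noisy black-box evaluation of f(x) = sin(x); the true gradient is cos(x)."""
    return math.sin(x) + random.gauss(0.0, sigma)

def central_diff_grad(x, h, n_reps=2000, sigma=0.01):
    """Average of central-difference estimates (f(x+h) - f(x-h)) / (2h).

    The bias of each estimate is O(h^2), while its variance is O(sigma^2 / h^2),
    so the step size h must balance the two error sources."""
    estimates = [(noisy_f(x + h, sigma) - noisy_f(x - h, sigma)) / (2 * h)
                 for _ in range(n_reps)]
    return sum(estimates) / n_reps

x0 = 1.0
true_grad = math.cos(x0)
for h in (1.0, 0.1, 0.01, 0.001):
    err = abs(central_diff_grad(x0, h) - true_grad)
    print(f"h = {h:>6}: |error| = {err:.4f}")
```

Running the loop shows the error is dominated by bias for large h and by noise-driven variance for very small h, with the smallest error at an intermediate step size; without knowing the noise level and the curvature of f in advance, picking that step size is exactly the ad hoc choice the paper aims to avoid.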


Related Articles

Minimax Estimation of a Variance

The nonparametric problem of estimating a variance based on a sample of size n from a univariate distribution which has a known bounded range but is otherwise arbitrary is treated. For squared error loss, a certain linear function of the sample variance is seen to be minimax for each n from 2 through 13, except n = 4. For squared error loss weighted by the reciprocal of the variance, a constant...


Conceptual complexity and the bias/variance tradeoff.

In this paper we propose that the conventional dichotomy between exemplar-based and prototype-based models of concept learning is helpfully viewed as an instance of what is known in the statistical learning literature as the bias/variance tradeoff. The bias/variance tradeoff can be thought of as a sliding scale that modulates how closely any learning procedure adheres to its training data. At o...


Bandit Smooth Convex Optimization: Improving the Bias-Variance Tradeoff

Bandit convex optimization is one of the fundamental problems in the field of online learning. The best algorithm for the general bandit convex optimization problem guarantees a regret of Õ(T^(5/6)), while the best known lower bound is Ω(T^(1/2)). Many attempts have been made to bridge the huge gap between these bounds. A particularly interesting special case of this problem assumes that the loss...


The Bias-Variance Tradeoff and the Randomized GACV

We propose a new in-sample cross validation based method (randomized GACV) for choosing smoothing or bandwidth parameters that govern the bias-variance or fit-complexity tradeoff in 'soft' classification. Soft classification refers to a learning procedure which estimates the probability that an example with a given attribute vector is in class 1 vs class 0. The target for optimizing the tra...


Bias-Variance Tradeoffs in Recombination Rate Estimation.

In 2013, we and coauthors published a paper characterizing rates of recombination within the 2.1-megabase garnet-scalloped (g-sd) region of the Drosophila melanogaster X chromosome. To extract the signal of recombination in our high-throughput sequence data, we adopted a nonparametric smoothing procedure, reducing variance at the cost of biasing individual recombination rates. In doing so, we s...



Journal

Journal title: Operations Research

Year: 2022

ISSN: ['1526-5463', '0030-364X']

DOI: https://doi.org/10.1287/opre.2022.2319